
    Task Delegability to AI: Evaluation of a Framework in a Knowledge Work Context

    With the increased research focus on using AI to augment rather than automate knowledge-intensive work, a myriad of questions arises on how this should be accomplished. To break down the complexity of human-AI collaboration, this paper identifies factors that contribute to the delegation of tasks to AI in such a setting and thereby gains insights into requirements for meaningful task allocation. To address this research gap, we carried out an empirical study of an existing task delegability framework in a knowledge work context. We employed several statistical approaches, including confirmatory factor analysis, linear regression, and analysis of covariance. Results show that an adapted framework with fewer factors fits the data better. Among the framework factors, trust predicts delegability best. Furthermore, we find a significant impact of the task on the delegability decision. Finally, we derive theoretical and design implications.

    HUMAN-AI COLLABORATION IN CONCEPTUALIZING DESIGN SCIENCE RESEARCH STUDIES: PERCEIVED HELPFULNESS OF GENERATIVE LANGUAGE MODEL'S SUGGESTIONS

    Solving complex problems has been named a new challenge for research on human-AI collaboration. In our study, we focus on a particular means of solving complex problems: design science research (DSR). We investigate whether AI, more specifically generative language models (GLMs), can support an individual in conceptualizing DSR studies by making helpful suggestions. To do so, we use extracts of a published DSR study and have GPT-3, a GLM, provide suggestions for aspects of this study. These suggestions are then evaluated in a survey (n=33) regarding their helpfulness. Results show that GLM suggestions are perceived as helpful, with some variation depending on expertise. Reported interest in using such a tool in the future was high. By describing how GLMs can offer helpful suggestions, we contribute toward a DSR tool support ecosystem and, more generally, toward knowledge on how humans and (generative) AI systems can team up to solve complex problems.

    Conversational Agent as a Black Hat: Can Criticising Improve Idea Generation?

    The Ideate phase of Design Thinking is the source of many ideas. In this context, criticism is often considered a creativity killer, yet recent studies show that it can be beneficial. An example of this is the black hat of the creativity method Six Thinking Hats: it points out the weaknesses of an idea so that they can be eliminated through further refinement. Previous research shows that conversational agents have an advantage over humans when criticizing because of their perceived neutrality. To investigate this, we developed and implemented a conversational agent and evaluated it using an A/B test. The results of the study show that the prototype is perceived as less neutral when it criticizes. Criticism by the conversational agent can lead to higher-quality ideas. This work contributes to a better understanding of conversational agents in the black hat role as well as of their neutrality.

    WHAT SHOULD AI KNOW? INFORMATION DISCLOSURE IN HUMAN-AI COLLABORATION

    AI-assisted Design Thinking shows great potential for supporting collaborative creative work. To foster creative thinking processes within teams through individualized suggestions, AI has to rely on data provided by the teams. As a prerequisite, team members need to weigh their disclosure preferences against the potential benefits of AI when disclosing information. To shed light on these decisions, we identify relevant information, such as emotional states or discussion arguments, that design thinking teams could provide to AI to enjoy the benefits of its support. Using the privacy calculus as a theoretical lens, we draft a research design to analyze user preferences for disclosing different kinds of information relevant to the service bundles that AI provides for the respective information. We make explorative contributions to the body of knowledge on AI use and the corresponding information disclosure. The findings are relevant for practice as they guide the design of AI that fosters information disclosure.

    Augmented Facilitation: Designing a multi-modal Conversational Agent for Group Ideation

    Human facilitators face the challenge of structuring and collecting relevant insights from collaborative creative work sessions, which can suffer when facilitators bear a high workload. Hence, for effective value co-creation in organizational ideation, we suggest augmenting facilitation with a conversational agent (CA). CAs can support such collaborative work by documenting and analyzing unstructured data. Following the design science research paradigm, and based on the literature on facilitation and human-AI collaboration, we derive design principles to develop a CA prototype that collects ideas from a group ideation session and displays them back in a structured (multi-modal) manner. We evaluate the CA by conducting four focus groups. Key findings show that the CA successfully distills and enriches information. Our study contributes to understanding the role of CAs in augmenting facilitation and provides guidance for practice on how to integrate these technologies into group meetings.

    A User-centric Taxonomy for Conversational Generative Language Models

    Conversational generative language models (GLMs) like ChatGPT are being rapidly adopted. Previous research on non-conversational GLMs showed that formulating prompts is critical for receiving good outputs. However, it is unclear how conversational GLMs are used when solving complex problems that require multi-step interactions. This paper addresses this research gap based on findings from a large participant event we conducted, in which ChatGPT was used iteratively and in a multi-step manner to solve a complex problem. We derived a taxonomy of prompting behavior employed for solving complex problems, as well as archetypes. While the taxonomy provides common knowledge on GLM usage based on analyzed input prompts, the archetypes facilitate the classification of operators according to their usage. With both, we provide exploratory knowledge and a foundation for design science research endeavors, enabling further research and development of prompt engineering, prompting tactics, and prompting strategies on common ground.